Federated training of large deep neural networks can often be restrictive due to the increasing costs of communicating the updates as model sizes grow. Various model pruning techniques have been designed in centralized settings to reduce inference time. Combining centralized pruning techniques with federated training seems intuitive for reducing communication costs, by pruning the model parameters right before the communication step. Moreover, such a progressive model pruning approach during training can also reduce training time and cost. To this end, we propose FedSparsify, which performs model pruning during federated training. In our experiments on the brain age prediction task (estimating a person's age from their brain MRI) in both centralized and federated settings, we demonstrate that models can be pruned up to 95% sparsity without affecting performance, even in challenging federated learning environments with highly heterogeneous data distributions. One surprising benefit of model pruning is improved model privacy. We demonstrate that models with high sparsity are less susceptible to membership inference attacks, a type of privacy attack.
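To make the core idea concrete, here is a minimal sketch of magnitude-based pruning applied to client updates right before a communication round. The function names, the global-threshold policy, and the plain FedAvg aggregation are illustrative assumptions; FedSparsify's actual pruning schedule may differ.

```python
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction of a flat weight vector
    (generic magnitude pruning; FedSparsify's schedule may differ)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def federated_round(client_weights, sparsity):
    """Each client prunes before communicating; the server averages the
    sparse updates (plain FedAvg aggregation assumed for illustration)."""
    pruned = [prune_by_magnitude(w, sparsity) for w in client_weights]
    return np.mean(pruned, axis=0)

# toy usage: three clients, pruned to 95% sparsity before the round
clients = [np.random.randn(10_000) for _ in range(3)]
global_w = federated_round(clients, sparsity=0.95)
print(np.mean(prune_by_magnitude(clients[0], 0.95) == 0.0))  # ~0.95
```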
Some of the tightest information-theoretic generalization bounds depend on the average information between the learned hypothesis and a single training example. However, these sample-wise bounds were derived only for the expected generalization gap. We show that even for the expected squared generalization gap no such sample-wise information-theoretic bounds exist. The same is true for PAC-Bayes and single-draw bounds. Remarkably, PAC-Bayes, single-draw, and expected squared generalization gap bounds that depend on information in pairs of examples do exist.
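For reference, the sample-wise bound in question (Bu, Zou and Veeravalli, 2020) takes the following form for a $\sigma$-subgaussian loss; the result above says no analogue of this form can exist for the expected squared gap, PAC-Bayes, or single-draw settings, where pairwise terms such as $I(W; Z_i, Z_j)$ become necessary.

```latex
% Sample-wise expected generalization bound for a sigma-subgaussian loss,
% with W the learned hypothesis and S = (Z_1, ..., Z_n) the training set:
\[
  \bigl|\,\mathbb{E}\,\mathrm{gen}(W, S)\,\bigr|
  \;\le\; \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^{2}\, I(W; Z_i)}.
\]
```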
The identification and classification of transitions between topological and microstructural regimes in pattern-forming processes are critical for understanding and fabricating microstructurally precise novel materials in many application domains. Unfortunately, the relevant microstructure transitions may depend on process parameters in subtle and intricate ways that are not captured by classic phase-transition theory. While supervised machine learning methods may be useful for identifying transition regimes, they need labels, which require prior knowledge of order parameters or the relevant structures describing these transitions. Motivated by the universal principles of dynamical systems, we instead use a self-supervised approach to solve the inverse problem of predicting process parameters from observed microstructures using neural networks. This approach does not require predefined, labeled data about the different classes of microstructural patterns or about the target task of predicting microstructure transitions. We show that the difficulty of performing the inverse-problem prediction task is related to the goal of discovering microstructure regimes, because qualitative changes in microstructural patterns correspond to changes in the uncertainty of the predictions of our self-supervised problem. We demonstrate the value of our approach by automatically discovering transitions between microstructural regimes in two distinct pattern-forming processes: the spinodal decomposition of a two-phase mixture, and the formation of concentration modulations of binary alloys during physical vapor deposition of thin films. This approach opens a promising path toward discovering and understanding unseen or hard-to-discern transition regimes, and ultimately toward controlling complex pattern-forming processes.
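A schematic of the self-supervised inverse problem: regress the known process parameter from the observed microstructure, and use predictive uncertainty, estimated here with a small ensemble as one common choice, to flag candidate transitions. The regressor, ensemble-based uncertainty, and all names are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_uncertainty(X, y, n_models=5):
    """Train an ensemble on the inverse problem (microstructure ->
    process parameter) and return the per-sample standard deviation
    of predictions as an uncertainty signal."""
    preds = []
    for seed in range(n_models):
        model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500,
                             random_state=seed)
        model.fit(X, y)
        preds.append(model.predict(X))
    return np.std(np.stack(preds), axis=0)

# toy usage: flattened microstructure images X, swept process parameter y;
# peaks in uncertainty along the parameter axis suggest regime transitions
X = np.random.rand(200, 256)        # stand-in for 16x16 microstructures
y = np.linspace(0.0, 1.0, 200)      # the swept process parameter
sigma = ensemble_uncertainty(X, y)
transition_candidates = y[np.argsort(sigma)[-5:]]
```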
Domain generalization algorithms use training data from multiple domains to learn models that generalize to unseen domains. While recently proposed benchmarks demonstrate that most existing algorithms do not outperform simple baselines, the established evaluation methods fail to expose the impact of the various factors that contribute to the poor performance. In this paper we propose an evaluation framework for domain generalization algorithms that allows decomposing the error into components capturing distinct aspects of generalization. Inspired by the prevalence of algorithms based on the idea of domain-invariant representation learning, we extend the evaluation framework to capture various types of failures in achieving invariance. We show that the largest contributor to the generalization error varies across methods, datasets, regularization strengths, and even training lengths. We observe two problems associated with the strategy of learning domain-invariant representations. On Colored MNIST, most domain generalization algorithms fail because they reach domain invariance only on the training domains. On Camelyon-17, domain invariance degrades the quality of representations on unseen domains. We hypothesize that focusing instead on tuning the classifier on top of a rich representation can be a promising direction.
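As one way to probe the closing hypothesis, a sketch that freezes a learned representation and tunes only a classifier on top, comparing in-domain and held-out-domain accuracy; the logistic-regression probe and random stand-in features are assumptions for illustration, not the paper's protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_representation(feats_train, y_train, feats_test, y_test):
    """Fit a linear classifier on frozen features from the training
    domains and score it on an unseen domain. A large gap suggests the
    representation, not the classifier, is the bottleneck."""
    clf = LogisticRegression(max_iter=1000).fit(feats_train, y_train)
    return clf.score(feats_train, y_train), clf.score(feats_test, y_test)

# toy usage with random stand-ins for a featurizer's outputs
rng = np.random.default_rng(0)
f_tr, y_tr = rng.normal(size=(500, 32)), rng.integers(0, 2, 500)
f_te, y_te = rng.normal(size=(200, 32)), rng.integers(0, 2, 200)
acc_in, acc_out = probe_representation(f_tr, y_tr, f_te, y_te)
```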
Recent improvements in the performance of state-of-the-art (SOTA) methods for graph representation learning (GRL) have come at significant computational resource requirements, e.g., for training via backpropagating gradients over many data epochs. Meanwhile, singular value decomposition (SVD) can find closed-form solutions to convex problems, using only a handful of epochs. In this paper, we make GRL more computationally tractable for those with modest hardware. We design a framework that computes the SVD of \textit{implicitly} defined matrices, and apply this framework to several GRL tasks. For each task, we derive a linear approximation of a SOTA model, where we design an (expensive-to-store) matrix $\mathbf{M}$ and train the model in closed form via the SVD of $\mathbf{M}$, without calculating the entries of $\mathbf{M}$. By converging to a unique point in one step, and without calculating gradients, our models show competitive empirical test performance over various graphs such as article citation and biological interaction networks. More importantly, SVD can initialize a deeper model that is non-linear almost everywhere, yet behaves linearly when its parameters reside on a hyperplane, onto which SVD initializes it. The deeper model can then be fine-tuned within only a few epochs. Overall, our procedure trains hundreds of times faster than state-of-the-art methods while competing on empirical test performance. We open-source our implementation at: https://github.com/samihaija/isvd
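A minimal sketch of computing a truncated SVD over an implicitly defined matrix using SciPy's LinearOperator, here for a toy $\mathbf{M} = AB^\top$ that is never materialized; the paper's matrix designs and solver are more specialized.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

# implicit matrix M = A @ B.T, defined only through mat-vec products
A = np.random.randn(10_000, 64)
B = np.random.randn(8_000, 64)

M = LinearOperator(
    (A.shape[0], B.shape[0]),
    matvec=lambda v: A @ (B.T @ v),    # M v    without forming M
    rmatvec=lambda v: B @ (A.T @ v),   # M^T v  without forming M
    dtype=np.float64,
)

# truncated SVD of the implicit matrix (top 16 singular triplets),
# never allocating the 10,000 x 8,000 entries of M
U, s, Vt = svds(M, k=16)
```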
Sampling from an unnormalized probability distribution is a fundamental problem in machine learning, with applications including Bayesian modeling, latent factor inference, and energy-based model training. After decades of research, variations of MCMC remain the default approach to sampling despite slow convergence. Auxiliary neural models can learn to speed up MCMC, but the overhead of training the extra model can be prohibitive. We propose a fundamentally different approach to this problem via a new Hamiltonian dynamics with a non-Newtonian momentum. In contrast to MCMC approaches like Hamiltonian Monte Carlo, no stochastic step is required. Instead, the proposed deterministic dynamics in an extended state space exactly sample the target distribution specified by an energy function, under an assumption of ergodicity. Alternatively, the dynamics can be interpreted as a normalizing flow that samples a specified energy model without training. The proposed Energy Sampling Hamiltonian (ESH) dynamics have a simple form that can be solved with existing ODE solvers, but we derive a specialized solver that exhibits much better performance. ESH dynamics converge faster than their MCMC competitors, enabling faster, more stable training of neural network energy models.
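For orientation, a sketch of plain deterministic Hamiltonian integration (standard leapfrog with Newtonian kinetic energy) for an energy E(x); ESH replaces the Newtonian momentum with a non-Newtonian one and uses a specialized solver, so this is background on the ODE-integration view, not the paper's method.

```python
import numpy as np

def grad_E(x):
    # energy gradient for a standard Gaussian target: E(x) = ||x||^2 / 2
    return x

def leapfrog(x, v, step=0.05, n_steps=500):
    """Deterministic Hamiltonian trajectory under Newtonian momentum.
    ESH alters the momentum term; this only illustrates integrating
    sampling dynamics as an ODE, with no stochastic step."""
    xs = []
    for _ in range(n_steps):
        v = v - 0.5 * step * grad_E(x)   # half momentum step
        x = x + step * v                 # full position step
        v = v - 0.5 * step * grad_E(x)   # half momentum step
        xs.append(x.copy())
    return np.array(xs)

traj = leapfrog(np.ones(2), np.zeros(2))   # orbits a level set of E
```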
Federated learning (FL) enables the distributed computation of machine learning models over various disparate, remote data sources, without requiring any individual data to be transferred to a centralized location. This results in improved generalizability of models and efficient scaling of computation as more sources and larger datasets are added to the federation. Nevertheless, recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site, requiring improved security solutions. In this work, we propose a secure FL framework using fully homomorphic encryption (FHE). Specifically, we use the CKKS construction, an approximate, floating-point-compatible scheme that benefits from ciphertext packing and rescaling. In our evaluation on large-scale brain MRI datasets, we use the proposed secure FL framework to train a deep learning model to predict a person's age from distributed MRI scans, a common benchmarking task, and demonstrate that there is no degradation in learning performance between the encrypted and non-encrypted federated models.
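A toy sketch of the encrypted-aggregation step using TenSEAL, one open-source CKKS implementation (the paper's exact stack and encryption parameters are assumptions here): clients encrypt their weight vectors, the server averages ciphertexts without decrypting, and only the secret-key holder recovers the result.

```python
import tenseal as ts

# CKKS context; the parameter choices below are illustrative defaults
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# each client encrypts its (flattened) model weights
client_updates = [[0.10, -0.20, 0.30], [0.30, -0.10, 0.50]]
encrypted = [ts.ckks_vector(context, w) for w in client_updates]

# the server aggregates ciphertexts without ever seeing plaintext weights
agg = encrypted[0] + encrypted[1]
avg = agg * (1.0 / len(encrypted))

# only the secret-key holder can decrypt the averaged model
print(avg.decrypt())   # approximately [0.2, -0.15, 0.4]
```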
Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships. To address this weakness, we propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. MixHop requires no additional memory or computational complexity, and outperforms on challenging baselines. In addition, we propose sparsity regularization that allows us to visualize how the network prioritizes neighborhood information across different graph datasets. Our analysis of the learned architectures reveals that neighborhood mixing varies per dataset.
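As a sketch of the mixing operation described above, one MixHop-style layer that concatenates feature transformations of successive adjacency powers. The dense numpy rendering and toy sizes are simplifications; the paper's version adds sparsity regularization and learned architectures.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def mixhop_layer(A_norm, X, weights, powers=(0, 1, 2)):
    """One MixHop layer: concatenate ReLU(A^j X W_j) over the chosen
    adjacency powers j, mixing neighbors at several distances at once."""
    outputs, H = [], X
    for j in range(max(powers) + 1):
        if j in powers:
            outputs.append(np.maximum(H @ weights[j], 0.0))
        H = A_norm @ H                  # move to the next power A^{j+1} X
    return np.concatenate(outputs, axis=1)

# toy usage: 5 nodes, 8 input features, 4 output features per power
A = (np.random.rand(5, 5) > 0.5).astype(float)
A = np.maximum(A, A.T)                  # symmetrize the toy graph
X = np.random.randn(5, 8)
W = {j: 0.1 * np.random.randn(8, 4) for j in (0, 1, 2)}
out = mixhop_layer(normalized_adjacency(A), X, W)   # shape (5, 12)
```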
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
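To illustrate the parameter-efficient alignment idea, a generic sketch of soft prompt tuning: trainable prompt vectors prepended to the input embeddings of a frozen model. This is in the spirit of instruction prompt tuning, but Med-PaLM's actual recipe (which also uses hard instruction exemplars) differs in detail; the toy model and sizes are assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepend trainable prompt vectors to the input embeddings of a
    frozen base model; only the prompt parameters receive gradients."""
    def __init__(self, base_model, embed_dim, prompt_len=16):
        super().__init__()
        self.base = base_model
        for p in self.base.parameters():
            p.requires_grad = False            # base LLM stays frozen
        self.prompt = nn.Parameter(0.02 * torch.randn(prompt_len, embed_dim))

    def forward(self, input_embeds):           # (batch, seq, dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.base(torch.cat([prompt, input_embeds], dim=1))

# toy usage with a stand-in "LLM" that consumes embeddings directly
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
toy_llm = nn.TransformerEncoder(layer, num_layers=1)
model = SoftPrompt(toy_llm, embed_dim=32)
out = model(torch.randn(2, 10, 32))            # (2, 16 + 10, 32)
```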
There is no settled universal 3D representation for geometry, with many alternatives such as point clouds, meshes, implicit functions, and voxels, to name a few. In this work, we present a new, compelling alternative for representing shapes using a sequence of cross-sectional closed loops. The loops across all planes form an organizational hierarchy which we leverage for autoregressive shape synthesis and editing. Loops are a non-local description of the underlying shape, as simple loop manipulations (such as shifts) result in significant structural changes to the geometry. This is in contrast to manipulating local primitives such as points in a point cloud or a triangle in a triangle mesh. We further demonstrate that loops are an intuitive and natural primitive for analyzing and editing shapes, both computationally and for users.
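A minimal sketch of the representation as a plane-ordered sequence of closed loops, with a shift edit illustrating why a single loop manipulation is non-local; the classes and fields here are assumptions for illustration, not the paper's data format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Loop:
    plane_z: float        # height of the cross-sectional plane
    points: np.ndarray    # (n, 2) closed polyline within that plane

@dataclass
class LoopShape:
    loops: list           # loops ordered by plane, lowest to highest

    def shift_loop(self, i, dx, dy):
        """Translate one cross-section in its plane. Unlike moving a
        single point or triangle, this reshapes the reconstructed
        surface across the whole band between adjacent planes."""
        self.loops[i].points = self.loops[i].points + np.array([dx, dy])

# toy usage: a two-loop "cylinder", then a shift that shears it
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
shape = LoopShape([Loop(0.0, circle.copy()), Loop(1.0, circle.copy())])
shape.shift_loop(1, dx=0.5, dy=0.0)
```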